Homework 5

Name:

Collaborators:

Due Friday, November 6th, 2015

Question 1: Inverting a Matrix

In this question you will code a function that computes the inverse of a non-singular matrix. In particular, you will use the Gauss-Jordan method to find the inverse.

Reduction to an Upper Triangular Matrix

An $n \times n$ matrix $A$ is invertible if and only if $A$ is row equivalent to $I_n$, the $n \times n$ identity matrix. In this case, the row operations that reduce $A$ to $I_n$ also reduce $I_n$ to $A^{-1}$.

You will find the inverse of $A$ by applying Gauss-Jordan elimination to the augmented matrix $[A|I_n]$.
In order to reuse the code you wrote for your previous homeworks, we will split the computation into two steps:

  • A downward pass of Gaussian elimination, which will reduce $A$ to an upper triangular matrix,
  • and an upward pass of Gaussian elimination, which will reduce the augmented matrix to reduced row echelon form.

We will provide you with some useful functions and scripts to test and debug your functions.


In [ ]:
function rowSwap!(A, i,j)
    # input:   A matrix
    #          i,j the row indices to be swapped
    n = size(A)[1]
    # checking that i and j are within the permitted range
    (i > n || j > n ) && error("Index out of range in the rowSwap")
    # if the indices are not the same
    if i != j 
        buffer = A[i,:]
        A[i,:] = A[j,:]
        A[j,:] = buffer
    end
end
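
As a quick sanity check (the values below are just an illustration), you can verify that rowSwap! behaves as expected:

In [ ]:
# a small sanity check for rowSwap! (example values chosen arbitrarily)
A = [1.0 2.0; 3.0 4.0]
rowSwap!(A, 1, 2)
# A should now be [3.0 4.0; 1.0 2.0]
println(A)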

Q1.a You will write a function that performs the same Gaussian elimination you wrote for Homework 4, but with one difference. The function takes as input:

  • A a square matrix,
  • B a matrix.

In other words, your function will take a matrix as a right-hand side. Your function will create the augmented system and perform the reduction to an upper triangular matrix. The output of the function will be the tuple (U,B1), where U is the upper triangular matrix resulting from the elimination and $B1$ is the second block of the augmented matrix.
To obtain $U$ and $B1$ from your augmented matrix, slice it (i.e., use [:,1:n] and [:,n+1:end]).
You will use your rowSwap! function acting on the augmented matrix.
Finally, your function will raise an error if:

  • the matrix is not square,
  • the matrix is singular,
  • the dimensions of the matrix and the right-hand side are not compatible.

Hints:

  • You may use the function 'hcat' to build the augmented matrix. In Julia you can type
    ? hcat
    and press Enter to obtain information about the function.
  • To find the pivot you can use the function "find()".
  • If the vector has only zero elements, then the function "find()" will output an empty vector.
  • You can detect that your matrix is non-invertible by checking that the output of "find()" is empty.
  • You can easily check whether a vector is empty using the "isempty()" function, which returns true or false.

In [ ]:
function gaussianElimination(A,B)
    #input:   A a square matrix
    #         B a matrix
    #output:  U upper triangular matrix
    #         B1 the second block of the reduced augmented matrix
    
    # safety checks
    (n,m) = size(A)
    (n != m) && error("Matrix is not square \n")
    (n != size(B)[1]) && error("Dimension mismatch \n")
    
    # create the augmented matrix 
    M = hcat(A,B)
    # fill the details 
    # 
    #
    # nested for loops 
    #
    #
    #
    #
    
    # slicing the matrices
    U = M[:,1:n]
    B1 = M[:,n+1:end]
    return (U,B1)
end
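
In case you get stuck on the nested loops, here is a minimal sketch of one possible downward pass that follows the hints above. The name gaussianEliminationSketch is used only to avoid clobbering your own function, and this loop ordering is just one valid choice.

In [ ]:
# a possible sketch of the downward pass (not the only valid ordering)
function gaussianEliminationSketch(A,B)
    (n,m) = size(A)
    (n != m) && error("Matrix is not square \n")
    (n != size(B)[1]) && error("Dimension mismatch \n")
    M = hcat(A,B)
    for k = 1:n
        # indices of the non-zero pivot candidates in column k, rows k:n
        pivots = find(M[k:end,k])
        # an empty output means the matrix is singular
        isempty(pivots) && error("Matrix is singular \n")
        # swap the pivot row into position k
        rowSwap!(M, k, k - 1 + pivots[1])
        # eliminate the entries below the pivot
        for i = k+1:n
            M[i,:] = M[i,:] - (M[i,k]/M[k,k])*M[k,:]
        end
    end
    return (M[:,1:n], M[:,n+1:end])
end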

Reduction to Reduced Row Echelon Form

Once the matrix is reduced to upper triangular form, you will perform an upward Gaussian elimination to obtain the reduced row echelon form.

Q1.b You will write a function that performs the second part of the Gauss-Jordan method, i.e. the reduction to reduced row echelon form.
The input of your function will be:

  • $U$: an upper triangular non-singular matrix,
  • $B$: a matrix with the same number of rows as $U$.

You will reduce the augmented matrix $[U|B]$ using the Gauss-Jordan method. Given that $U$ is already in upper triangular form, you only need to reduce it to diagonal form, i.e., eliminate the non-zero elements above the diagonal. After each row is normalized by its pivot, the final diagonal form will be the identity matrix.

This transformation to reduced row echelon form is equivalent to a triangular solve with multiple right-hand sides. The output of the function will be the tuple $(I,X)$. $I$ is the first block of the augmented matrix, and it should be almost an identity matrix (up to machine precision). $X$ will be the solution to the matrix system $UX = B$. Note that in this case the solution $X$ is a matrix.

Your function needs to have safeguards against a size mismatch (i.e., the sizes of the matrices are not compatible, or your matrix $U$ is not a square matrix).

Hint: if you need to run a for loop that goes from n-1 down to 1, you can use the syntax
for j = n-1:-1:1


In [ ]:
function upwardGaussianElimination(U,B)
    #input:   U an upper triangular matrix
    #         B a matrix
    #output:  I an almost-identity matrix
    #         X the solution of UX = B
    
    # safety checks
    (n,m) = size(U)
    (n != m) && error("Matrix is not square \n")
    (n != size(B)[1]) && error("Dimension mismatch \n")
    
    # fill the details (upward elimination loop)
    
    return (I,X)
end
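
A minimal sketch of the upward pass is given below; it normalizes each pivot to 1 so that the first block ends up as an (almost) identity matrix. The name is again chosen only to avoid clashing with your own function.

In [ ]:
# a possible sketch of the upward pass (assumes U is non-singular upper triangular)
function upwardGaussianEliminationSketch(U,B)
    (n,m) = size(U)
    (n != m) && error("Matrix is not square \n")
    (n != size(B)[1]) && error("Dimension mismatch \n")
    M = hcat(U,B)
    for k = n:-1:1
        # normalize row k so that the pivot becomes 1
        M[k,:] = M[k,:]/M[k,k]
        # eliminate the entries above the pivot
        for i = k-1:-1:1
            M[i,:] = M[i,:] - M[i,k]*M[k,:]
        end
    end
    return (M[:,1:n], M[:,n+1:end])
end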

You can test that your function is correct by running the following script:


In [ ]:
# size of the Matrix
m = 100
# creating an upper triangular Matrix 
U = diagm(m*ones(m,1)[:], 0) + diagm(rand(m-1,1)[:], 1) + diagm(rand(m-2,1)[:], 2)
# creating the rhs
B = rand(size(U))
@time (I,X) = upwardGaussianElimination(U,B)
print("Residual of the backward substitution is ", norm(U*X -B)/norm(B),"\n")
print("Distance to an Identity matrix is ", norm(I - eye(m)), "\n")

The residual should be extremely small (around machine epsilon).

Inverse of a Matrix

Q1.c You will write a (very short) function that finds the inverse of a matrix $A$.
The input of your function will be:

  • A, a square matrix.

The output will be its inverse $A^{-1}$.

Your function will apply Gaussian elimination to the augmented matrix $[A|I_n]$, and it will use the Gauss-Jordan method to find the inverse of $A$. Your function needs to check that the matrix is square.


In [ ]:
function invert(A)
    # input:    A square matrix 
    # output:   A^{-1}
    
    # fill the details (rather short!)
    return Ainv
end
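
For reference, here is a hedged sketch of how the two passes combine into Gauss-Jordan elimination on $[A|I_n]$; it calls the sketch functions above, so substitute your own names.

In [ ]:
# a possible sketch: Gauss-Jordan applied to [A | I_n]
function invertSketch(A)
    (n,m) = size(A)
    (n != m) && error("Matrix is not square \n")
    # downward pass on the augmented matrix [A | I_n]
    (U, B1) = gaussianEliminationSketch(A, eye(n))
    # the upward pass reduces U to the identity and B1 to A^{-1}
    (Id, Ainv) = upwardGaussianEliminationSketch(U, B1)
    return Ainv
end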

You can test your code by running the following script:


In [ ]:
# size of the Matrix
m = 100
# creating the Matrix 
A = randn(m,m) + m*eye(m)
# compute the inverse
Ainv = invert(A)
# test if Ainv is the inverse 
println("Distance to the identity is ", norm(A*Ainv - eye(m)))

The residual should be extremely small (around machine epsilon).

Question 2: Solving a linear system in a stable manner

From your textbook you know that, in order to have a stable solver, you need to perform some kind of pivoting.

Q2.a You will write a function that performs Gaussian elimination with partial pivoting (Algorithm 6.2 in your textbook). You will use as a base the function you wrote for Homework 4, and you will modify it slightly. The function takes as input:

  • A a square matrix,
  • b a vector.

Your function will create the augmented system and perform the Gaussian elimination. The output of the function will be the tuple (U,b1), where U is the upper triangular matrix resulting from the elimination and b1 is the resulting vector.
To obtain $U$ and $b1$ from your augmented matrix, slice it (i.e., use [:,1:n] and [:,n+1:end]).
You will use your rowSwap! function acting on the augmented matrix to perform the partial pivoting.
Finally, your function will raise an error if:

  • the matrix is not square,
  • the matrix is singular,
  • the dimensions of the matrix and the vector are not compatible.

In [ ]:
function gaussianEliminationPartialPivoting(A,b)
    #input:   A a square matrix
    #         b a vector
    #output:  U upper triangular matrix
    #         b1 the resulting vector 
    
    # safety checks
    (n,m) = size(A)
    (n != m) && error("Matrix is not square \n")
    (n != size(b)[1]) && error("Dimension mismatch \n")
    
    # create the augmented matrix 
    M = hcat(A,b)
    #
    #
    #  fill the details!
    #
    #
    #
    
    # slicing the matrices
    U = M[:,1:n]
    b1 = M[:,n+1:end]
    return (U,b1)
end
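
The only change relative to Q1.a is the pivot choice: instead of taking the first non-zero entry in the column, take the entry of largest magnitude at or below the diagonal. A minimal sketch (with a hypothetical name, using indmax to locate the largest entry):

In [ ]:
# a possible sketch of partial pivoting: swap in the largest pivot in magnitude
function gaussianEliminationPartialPivotingSketch(A,b)
    (n,m) = size(A)
    (n != m) && error("Matrix is not square \n")
    (n != size(b)[1]) && error("Dimension mismatch \n")
    M = hcat(A,b)
    for k = 1:n
        # index (within column k, rows k:n) of the entry of largest magnitude
        p = indmax(abs(M[k:end,k]))
        # a zero maximal pivot means the matrix is singular
        (M[k-1+p,k] == 0) && error("Matrix is singular \n")
        rowSwap!(M, k, k - 1 + p)
        # eliminate the entries below the pivot
        for i = k+1:n
            M[i,:] = M[i,:] - (M[i,k]/M[k,k])*M[k,:]
        end
    end
    return (M[:,1:n], M[:,n+1:end])
end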

Q2.b You will use the triangular solver you wrote in the last homework to implement a linear solver.


In [ ]:
# write (or just copy-paste) your triangular solver here
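
If you no longer have your Homework 4 solver at hand, the following is a minimal backward-substitution sketch; it accepts a matrix right-hand side, so the same code also works with the slicing from Q1.

In [ ]:
# a minimal backward-substitution sketch with a (possibly matrix) right-hand side
function backwardSubstitution(U,B)
    (n,m) = size(U)
    (n != m) && error("Matrix is not square \n")
    (n != size(B)[1]) && error("Dimension mismatch \n")
    X = zeros(size(B))
    for i = n:-1:1
        (U[i,i] == 0) && error("Matrix is singular \n")
        # subtract the contribution of the already-computed unknowns
        X[i,:] = (B[i,:] - U[i,i+1:end]*X[i+1:end,:])/U[i,i]
    end
    return X
end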

In an analogous manner to Homework 4, you will use the Gaussian elimination algorithm with partial pivoting and your triangular solver to obtain a more stable linear solver.


In [ ]:
function solveGaussPivoted(A,b)
    # input:    A square matrix 
    #           b vector
    # output:   x = A\b
    
    # fill the details
    
end
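
A possible body, mirroring solveGauss in Q2.c below, assuming your Q2.a function and triangular solver are defined (the sketch name is hypothetical):

In [ ]:
# a possible body: pivoted elimination followed by backward substitution
function solveGaussPivotedSketch(A,b)
    (U,b1) = gaussianEliminationPartialPivoting(A,b)
    return backwardSubstitution(U,b1)
end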

You will test your function by finding the solution to $Ax = b$ for a mildly ill-conditioned matrix.


In [ ]:
# size of the Matrix
m = 100
# creating the Matrix 
A = randn(m,m)
println("The conditioning number of A is " ,cond(A))
# creating the rhs
b = rand(size(A)[1],1)
@time x = solveGaussPivoted(A,b)
print("Residual of the solver is ", norm(A*x -b)/norm(b),"\n")

Q2.c You can use the Gaussian elimination you wrote in Q1 (without pivoting) to solve a linear system:


In [ ]:
function solveGauss(A,b)
    # input:    A square matrix 
    #           b vector
    # output:   x = A\b
    (U,b1) = gaussianElimination(A,b)
    return backwardSubstitution(U,b1)
end

You will compare the pivoted versus the non-pivoted solver by running the following script.


In [ ]:
# size of the Matrix
m = 100
# creating the Matrix 
A = randn(m,m)
# computing the conditioning number 
println("The conditioning number of A is " ,cond(A))
# creating the rhs
b = rand(size(A)[1],1)
@time xPiv = solveGaussPivoted(A,b)
@time xNonPiv = solveGauss(A,b)
print("Residual of the solution from the pivoted solver is ", norm(A*xPiv -b)/norm(b),"\n")
print("Residual of the solution from the non pivoted solver is ", norm(A*xNonPiv -b)/norm(b),"\n")

As you can see, each time you execute the script the pivoted and non-pivoted algorithms give back answers with different residuals. Why do you see this behavior? How does pivoting help the stability?

Answer:


In [ ]: